How to Disable Google's Gemini in Chrome

WIRED

Chrome users were caught off guard by a 4-GB Google AI model baked into Chrome, sparking privacy concerns. If you use Google's Chrome browser for desktop, there's probably a Gemini Nano AI model running on your computer right now, taking up about 4 GB of space. That's not necessarily a bad thing, but if you didn't know about it and don't want it, there's a way to turn it off. The file started auto-downloading for Chrome users in 2024 after Google built Gemini Nano into the browser.


Using AI for Just 10 Minutes Might Make You Lazy and Dumb, Study Shows

WIRED

New research suggests that reliance on AI assistants can have a negative impact on people's ability to think and problem-solve. Using AI chatbots for even just 10 minutes may have a shockingly negative impact on people's ability to think and problem-solve, according to a new study from researchers at Carnegie Mellon, MIT, Oxford, and UCLA. Researchers tasked people with solving various problems, including simple fractions and reading comprehension, through an online platform that paid them for their work. They conducted three experiments, each involving several hundred people. Some participants were given access to an AI assistant capable of solving the problem autonomously.


Hackers Hate AI Slop Even More Than You Do

WIRED

Hackers and other cybercriminals are complaining about "AI shit" flooding platforms where they discuss cyberattacks and other illegal activity. "I'm disappointed that you are working to incorporate AI garbage into the site," one annoyed person, posting anonymously, said in an online message. "No-one is asking for this--we want you to improve the site, stop charging for new features." Only, this is not a regular internet user moaning about AI being forced into their favorite app. Instead, they are complaining about a cybercrime forum's plans to introduce more generative AI.


US to safety test new AI models from Google, Microsoft, xAI

BBC News

New artificial intelligence (AI) tools and capabilities from Google, Microsoft and xAI will now be tested by the US Department of Commerce before they are released to the public. The tech firms have agreed to voluntarily submit their models for testing through Commerce's Center for AI Standards and Innovation (CAISI). The new pacts are an expansion of agreements by AI companies like OpenAI and Anthropic that were reached during the Biden Administration, and will see AI models from all of the companies evaluated for their capabilities and security. "These expanded industry collaborations help us scale our work in the public interest at a critical moment," CAISI's director Chris Fall said. Overall, the evaluations of the AI tools will cover testing, collaborative research and best practice development related to commercial AI systems.


Backlash builds over NHS plan to hide source code from AI hacking risk

New Scientist

NHS England is pulling its open-source software from the internet because of fears around computer-hacking AI models like Mythos. A decision by NHS England to withdraw open-source code created with UK taxpayer funds because of the risk posed by computer-hacking AI models is attracting growing backlash. Last month, Mythos, an AI created by technology firm Anthropic, was widely reported to be capable of discovering flaws in virtually any software, potentially allowing hackers to break into systems running it. NHS England has now told staff that existing and future software must be pulled from public view and kept behind closed doors by 11 May because of this risk. The decision goes against the NHS service standard, which requires that staff make any software they produce open-source so that tools can be built upon, improved and used without the need for duplicated effort.


Disneyland Now Uses Face Recognition on Visitors

WIRED

Plus: The NSA tests Anthropic's Mythos Preview to find vulnerabilities, a Finnish teen is charged over the Scattered Spider hacking spree, and more. A gunman attempted to enter the White House Correspondents' Dinner in Washington, DC, last weekend, while President Donald Trump, Vice President JD Vance, and other administration officials were in attendance. Media reports and Trump himself quickly identified the suspected shooter as 31-year-old engineer and computer scientist Cole Tomas Allen. The California resident was arrested at the scene on Saturday and appeared Monday in the US District Court for the District of Columbia to face three federal charges: attempting to assassinate the president, transportation of a firearm in interstate commerce, and discharge of a firearm during a crime of violence. The authentication standards body known as the FIDO Alliance announced working groups this week along with Google and Mastercard to develop technical guardrails for validating and protecting transactions initiated by an AI agent.


Pentagon says US military to be an 'AI-first' fighting force

BBC News

The US military plans to increase its use of artificial intelligence (AI) further after the Pentagon agreed to new and expanded contracts with some of the biggest names in technology. Under eight agreements with Google, OpenAI, Amazon, Microsoft, SpaceX, Oracle, Nvidia and the start-up Reflection, the Pentagon said AI technology would now be used for "any lawful operational use". "These agreements accelerate the transformation [of] the US military as an AI-first fighting force," the Pentagon said. Conspicuous by its absence is Anthropic, as the company has said it is concerned about how the Pentagon could use its tools in warfare and domestically. The firm is now suing the government over the alleged retaliation it faced after refusing to accept "any lawful use" language in its own contract.


Dangerous New Linux Exploit Gives Attackers Root Access to Countless Computers

WIRED

The exploit, dubbed CopyFail and tracked as CVE-2026-31431, allows hackers to take over PCs and data center servers. The Linux vulnerabilities have been patched--but many machines remain at risk. Publicly released exploit code for an effectively unpatched vulnerability that gives root access to virtually all releases of Linux is setting off alarm bells as defenders scramble to ward off severe compromises inside data centers and on personal devices. The vulnerability, along with code that exploits it, was released Wednesday evening by researchers from security firm Theori, five weeks after they privately disclosed it to the Linux kernel security team. The critical flaw, tracked as CVE-2026-31431 and dubbed CopyFail, is a local privilege escalation, a vulnerability class that allows unprivileged users to elevate themselves to administrators.


You Found Satoshi? Let's See the Receipts

WIRED

Two new projects, including one from a Pulitzer-winning reporter, claim they've solved the mystery of Bitcoin's creator. So why does the hunt continue? In December 2024, at the suggestion of a mutual friend, I met with a professional investigator named Tyler Maroney. He told me he was on a quest to discover the identity of Satoshi Nakamoto, the inventor of Bitcoin, and he felt that he had cracked the case. My first thought was, join the club. Literally dozens of journalists and investigators have spent months or even years trying to uncover the mysterious creator of the most popular cryptocurrency, who ended his (or her or their) online presence in 2011 and amassed around $83 billion in Bitcoin.


NHS England rushes to hide software over AI hacking fears

New Scientist

NHS England is hurriedly withdrawing all the software it has written from public view because of the perceived risk of hacking from cutting-edge artificial intelligence. Security experts say the move is unnecessary and counterproductive. Software produced by the National Health Service has previously been made open-source and listed on GitHub because it is created with public money. This allows other organisations to build upon it and make better services more cheaply without duplicating effort. But NHS England has issued new guidance to staff, which has been shared with New Scientist, that demands existing and future software be pulled from public view and kept behind closed doors.